Tesla Shareholders Approve Elon Musk's $1 Trillion Pay Package

WIRED

The unprecedented payday will go into full effect by 2035--as long as Tesla hits ambitious financial and production targets. On Thursday, Tesla shareholders approved an unprecedented $1 trillion pay package for CEO Elon Musk. The full compensation plan will go into effect by 2035--assuming the company successfully hits ambitious financial and production targets. If that happens, Musk will also gain control of some 25 percent of the business, up from the 12 percent he controls currently. More than 75 percent of Tesla shareholders approved the move in a preliminary vote.


Elon Musk Wants 'Strong Influence' Over the 'Robot Army' He's Building

WIRED

Tesla might be an electric automaker, but CEO Elon Musk has made clear that he thinks of it as much more: an innovator in artificial intelligence and software, a builder of world-shaking robots. He has also argued that Tesla should be worth far more than it is today: up to $20 trillion, he posted in July, more than five times the current worth of Nvidia. Musk has also made it clear that he wants to get paid, and paid a lot. In November, Tesla shareholders will vote on the board's proposal to pay the CEO a remarkable $1 trillion over the next decade. The deal would also increase Musk's stake in Tesla from 13 percent to a quarter.


Analyzing the Influence of Training Samples on Explanations

Artelt, André, Hammer, Barbara

arXiv.org Artificial Intelligence

Explainable AI (XAI) constitutes a popular approach to analyzing the reasoning of AI systems by explaining their decision-making, e.g., providing a counterfactual explanation of how to achieve recourse. However, in cases such as an unexpected explanation, the user might be interested in learning about the cause of that explanation -- e.g., the properties of the training data that are responsible for the observed explanation. Under the umbrella of data valuation, initial approaches have been proposed that estimate the influence of data samples on a given model. In this work, we take a slightly different stance: we are interested in the influence of single samples on a model explanation rather than on the model itself. We therefore propose the novel problem of identifying training data samples that have a high influence on a given explanation (or related quantity) and investigate the particular case of differences in recourse cost between protected groups. For this, we propose an algorithm that identifies such influential training samples.
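To make the general idea concrete, the following is a minimal sketch (not the authors' algorithm) of a leave-one-out approach: retrain a simple model with each training sample removed and measure how much the gap in a recourse-cost proxy between two protected groups changes. The data, the distance-to-boundary cost proxy, and the `recourse_cost_gap` helper are hypothetical illustrations.

```python
# Illustrative leave-one-out sketch (assumption: not the paper's algorithm):
# estimate how much each training sample influences the gap in recourse cost
# between two protected groups.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: features X, labels y, binary group membership g (all hypothetical).
n = 200
X = rng.normal(size=(n, 2))
g = rng.integers(0, 2, size=n)
y = (X[:, 0] + 0.5 * g + 0.3 * rng.normal(size=n) > 0).astype(int)

def recourse_cost_gap(X_tr, y_tr, X_eval, g_eval):
    """Fit a linear model and compare the average distance to the decision
    boundary (a simple proxy for recourse cost) for negatively classified
    points in each group."""
    clf = LogisticRegression().fit(X_tr, y_tr)
    w, b = clf.coef_[0], clf.intercept_[0]
    dist = np.abs(X_eval @ w + b) / np.linalg.norm(w)  # distance to boundary
    neg = clf.predict(X_eval) == 0                      # points needing recourse
    cost = lambda grp: dist[neg & (g_eval == grp)].mean()
    return cost(1) - cost(0)

base_gap = recourse_cost_gap(X, y, X, g)

# Leave-one-out influence: how much does removing sample i change the gap?
influence = np.array([
    recourse_cost_gap(np.delete(X, i, axis=0), np.delete(y, i), X, g) - base_gap
    for i in range(n)
])
print("Most influential training samples:", np.argsort(-np.abs(influence))[:5])
```

Exhaustive leave-one-out retraining is only feasible for small models and datasets; the appeal of influence-estimation methods in this line of work is that they approximate such effects without refitting the model for every sample.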